Turning 3D dreams into reality: integrating AI and motion capture for 3D stories
Audiences immersed in cinematic worlds such as Avatar and Toy Story are captivated by the former's breathtaking visuals and vast fantasy landscapes, and the latter's playful portrayal of the secret lives of toys. Such films inspire aspiring animators to turn their own ideas into 3D creations. Yet achieving this dream is far from easy, as 3D animation is an intricate art form that requires extensive technical knowledge. It involves character modelling to sculpt forms and facial expressions, rigging to construct character skeletons, and texturing to add colour and surface detail. Reaching proficiency in these skills demands a deep understanding of 3D animation tools, making the process daunting for beginners.
“Many aspiring students lack the technical expertise to bring their ideas to life. To address this, our project integrates AI tools to support 3D animation and streamline the workflow. By lowering technical barriers, both students and educators who are less technologically confident can enjoy the creative experience,” said Mr Mike Chui Hin-leung from the Department of Mathematics and Information Technology (MIT), Principal Investigator of the learning and teaching knowledge-transfer project.
The project, titled Fusing Motion Capture with Generative Artificial Intelligence for Creating Pedagogical 3D Animated Stories, has developed an integrated production pipeline connecting AI-assisted storyboarding, visual ideation, 3D character design, modelling and texturing, scene creation, motion capture (mocap) data generation, audio production, and final animation integration. “This workflow makes the entire 3D animation process more efficient and organised. It allows creators to focus on storytelling and artistic expression rather than spending time mastering complex software,” Mr Chui added.
From a research perspective, while mocap has been widely used in the film industry, it has received little attention in academic research. This is primarily due to the perception that mocap technologies are costly and difficult to implement in educational settings. Factors such as the large space required for a mocap studio, the time-consuming setup of a mocap suit, complex wiring, and the user-unfriendly interfaces of bundled mocap software discourage research efforts. “This research project is an attempt to help pre-service teachers and professionals outside education use mocap technologies and generative AI to make 3D animated stories,” he explained.
The project has benefited from the rapid evolution of generative AI in recent years. During the two-year initiative, running from 2024 to 2025, AI tools for generating 3D objects and characters advanced rapidly, with many platforms offering flexible, user-friendly solutions for creating 2D and 3D images. “As new AI tools enable students to visualise ideas that were difficult to express using older software, we decided to adopt these tools midway through the project,” noted Mr Chui.
The growing capability of generative AI has simplified the design process for many first-time creators who lack strong technical drawing or 3D modelling skills. AI tools can transform rough design concepts, such as character poses and appearances, into 2D visual references. “Students can then use AI tools to refine the visuals and convert them into 3D assets. These tools help students turn their ideas into tangible designs, experiment with variations, and make creative decisions earlier in the process,” Mr Chui explained.
Character rigging, which links a digital skeleton to a 3D model to enable movement, is one of the most demanding aspects of animation. “Even minor rigging errors can cause characters to move incorrectly or not at all,” said the project leader. “Through repeated testing, we identified which rigging steps were essential for beginners and which advanced procedures could be simplified or postponed. Streamlining the steps helps reduce unnecessary cognitive load.”
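To see why rigging is so error-prone, it helps to know what the link between skeleton and model actually is: each vertex of the mesh is assigned weights over nearby bones, and at animation time the bones' transforms are blended per vertex. A minimal sketch of this standard technique (linear blend skinning, illustrative only and not specific to the project's tools), in Python with NumPy:

```python
import numpy as np

def linear_blend_skin(vertices, weights, bone_transforms, rest_inverse):
    """Deform mesh vertices by blending bone transforms (linear blend skinning).

    vertices        : (V, 3) rest-pose vertex positions
    weights         : (V, B) skinning weights; each row should sum to 1
    bone_transforms : (B, 4, 4) current world transform of each bone
    rest_inverse    : (B, 4, 4) inverse of each bone's rest-pose transform
    """
    num_vertices = vertices.shape[0]
    # Homogeneous coordinates so 4x4 matrices can translate as well as rotate
    homo = np.hstack([vertices, np.ones((num_vertices, 1))])  # (V, 4)
    skinned = np.zeros((num_vertices, 3))
    for b in range(bone_transforms.shape[0]):
        # Map each vertex out of bone b's rest space, then into its posed space
        m = bone_transforms[b] @ rest_inverse[b]              # (4, 4)
        skinned += weights[:, [b]] * (homo @ m.T)[:, :3]
    return skinned

# A tiny rig: two bones at rest (identity); moving bone 0 drags the
# vertices weighted to it, while vertices weighted to bone 1 stay put.
verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
w = np.array([[1.0, 0.0],    # vertex 0 follows bone 0 entirely
              [0.0, 1.0]])   # vertex 1 follows bone 1 entirely
rest_inv = np.stack([np.eye(4), np.eye(4)])
pose = np.stack([np.eye(4), np.eye(4)])
pose[0, 0, 3] = 1.0          # translate bone 0 by +1 along x
out = linear_blend_skin(verts, w, pose, rest_inv)
```

Badly normalised weights or a mismatched rest pose in this blend are exactly the kinds of "minor rigging errors" that make a character deform incorrectly, which is why the team pared the rigging steps down for beginners.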
To gather user feedback, the project team invited primary, secondary, and EdUHK students to participate in trial lessons on AI-assisted 3D animation production. Many students described AI as a helpful creative companion. “The AI tools helped them overcome technical challenges, supported their ideas, and reduced frustration. Using AI boosted their confidence and encouraged them to progress to more complex stages of production,” Mr Chui said.
The project team also valued the process, taking a trial-and-error approach that turned every challenge into a learning opportunity. The team experimented with various hardware and software tools to determine the most suitable setups for student use. “We could employ more advanced technology to ensure high reliability. However, this often discourages students, as they may simplify their creative ideas to avoid learning more sophisticated technology. Ultimately, we decided to maintain a balance between creative freedom and technical reliability,” he explained.
“As supervisors, we also had to get down to the finer details,” Mr Chui added. For example, the team found that smooth live motion-capture streaming depends on specific Wi-Fi conditions and channel selection. “The lesson learnt is to assess whether live streaming or pre-recorded data is more appropriate for each classroom environment,” he observed.
The UGC-funded project demonstrated that, by prioritising storytelling over technical mastery, teachers can boost students’ confidence, creativity, sense of ownership, and acceptance of AI, a core component of AI literacy. Throughout the project, students from EdUHK and local schools recognised the educational benefits of integrating motion capture with generative AI. The initiative promotes ongoing experimentation and refinement, paving the way for practical workflows in future curricula while engaging learners with emerging AI technologies.
“For many young people, creating 3D animated characters is an exciting and rewarding journey. Designing characters with distinct personalities, styles, and stories allows creativity to flourish. The project shows that, by thoughtfully applying AI and structuring the learning process effectively, 3D animation can enhance students’ visual creativity and digital literacy, both crucial skills in an age where images often speak louder than words. It also equips them for future careers in technology and creative industries,” concluded Mr Chui.
While Mr Mike Chui led the project, Dr Gary Chow Chi-ching from the Department of Health and Physical Education (HPE), Dr Fu Hong from MIT and Professor Bill Yeung Chi-ho from the Department of Science and Environmental Studies served as Co-Investigators. Professor Philip Yu Leung-ho from MIT shared insights, and Miss Vendela Lo Wing-tung from the same department provided consistent logistical support throughout the project.